Normalizing flows are tractable density models that can approximate complicated target distributions, e.g. the Boltzmann distributions of physical systems. However, current methods for training flows either suffer from mode-seeking behavior, use samples from the target that were generated beforehand by expensive MCMC simulations, or use stochastic losses that have very high variance. To avoid these problems, we augment flows with annealed importance sampling (AIS) and minimize the mass-covering $\alpha$-divergence with $\alpha = 2$, which minimizes importance-weight variance. Our method, Flow AIS Bootstrap (FAB), uses AIS to generate samples in regions where the flow is a poor approximation of the target, facilitating the discovery of new modes. We target with AIS the minimum-variance distribution for estimating the $\alpha$-divergence via importance sampling. We also use a prioritized buffer to store and reuse AIS samples. These two features significantly improve FAB's performance. We apply FAB to complex multimodal targets and show that we can approximate them very accurately where previous methods fail. To the best of our knowledge, we are the first to learn the Boltzmann distribution of the alanine dipeptide molecule using only the unnormalized target density, without access to samples generated via molecular dynamics (MD) simulations: FAB produces better results than maximum-likelihood training on MD samples while using 100 times fewer target evaluations. After reweighting the samples with importance weights, we obtain unbiased histograms of dihedral angles that are almost identical to the ground truth.
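For context, the connection between the $\alpha = 2$ divergence and importance-weight variance can be made explicit. In one common convention (our addition, not notation from the abstract),

$$ D_{\alpha=2}(p \,\|\, q) \;=\; \frac{1}{2}\left(\int \frac{p(x)^2}{q(x)}\,dx - 1\right) \;=\; \frac{1}{2}\,\operatorname{Var}_{q}\!\left[\frac{p(x)}{q(x)}\right], $$

where the last equality holds for normalized $p$ and $q$, since then $\mathbb{E}_q[p/q] = 1$. Minimizing this divergence over the flow $q$ therefore directly minimizes the variance of the importance weights $w = p/q$.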
One aim of representation learning is to recover the original latent code that generated the data, a task which requires additional information or inductive biases. A recently proposed approach termed Independent Mechanism Analysis (IMA) postulates that each latent source should influence the observed mixtures independently, complementing standard nonlinear independent component analysis and drawing inspiration from the principle of independent causal mechanisms. While it was shown in theory and experiments that IMA helps recover the true latents, the method's performance has so far only been characterized when the modeling assumptions are exactly satisfied. Here, we test the method's robustness to violations of its underlying assumptions. We find that the benefits of IMA-based regularization for recovering the true sources extend to mixing functions with various degrees of violation of the IMA principle, while standard regularizers do not provide the same merits. Moreover, we show that unregularized maximum likelihood recovers mixing functions that systematically deviate from the IMA principle, and we provide an argument elucidating the benefits of IMA-based regularization.
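To make "each source influences the mixtures independently" concrete, here is our paraphrase of the quantity used in the IMA literature (not text from this abstract): for a mixing function $f$ with Jacobian $J_f$, the IMA contrast

$$ C_{\mathrm{IMA}}(f, z) \;=\; \sum_{i=1}^{n} \log \left\lVert \frac{\partial f}{\partial z_i}(z) \right\rVert \;-\; \log \left\lvert \det J_f(z) \right\rvert \;\geq\; 0 $$

is nonnegative by Hadamard's inequality and vanishes exactly when the columns of $J_f(z)$ are orthogonal, i.e., when each latent source acts on the observations along its own direction; IMA-based regularization penalizes this contrast alongside the likelihood.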
Two-sample tests are important in statistics and machine learning, both as tools for scientific discovery and for detecting distribution shifts. This has led to the development of many sophisticated test procedures that go beyond the standard supervised learning framework and whose usage can require specialized knowledge about two-sample testing. We use a simple test that takes the mean discrepancy of a witness function as the test statistic and prove that minimizing a squared loss leads to a witness with optimal testing power. This allows us to leverage recent advances in AutoML. Without any user input about the problem at hand, and using the same method in all our experiments, our AutoML two-sample test achieves competitive performance on a diverse distribution shift benchmark as well as on challenging two-sample testing problems. We provide an implementation of the AutoML two-sample test in the Python package autotst.
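As a minimal sketch of the witness-function idea (our from-scratch illustration; the interface of the actual autotst package may differ, and a gradient-boosted regressor stands in here for the AutoML fitter):

```python
# Witness two-sample test sketch: fit a witness on one split by regressing
# labels (+1 for sample P, -1 for sample Q) with squared loss, then use the
# mean discrepancy of the witness on a held-out split as the test statistic,
# with a permutation test for the p-value.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def witness_two_sample_test(X, Y, n_perm=1000, seed=0):
    rng = np.random.default_rng(seed)
    # Split each sample: one half trains the witness, the other half tests.
    X_tr, X_te = np.array_split(X, 2)
    Y_tr, Y_te = np.array_split(Y, 2)
    Z_tr = np.vstack([X_tr, Y_tr])
    labels = np.concatenate([np.ones(len(X_tr)), -np.ones(len(Y_tr))])
    witness = GradientBoostingRegressor().fit(Z_tr, labels)
    # Test statistic: mean witness value on held-out X minus mean on held-out Y.
    w = witness.predict(np.vstack([X_te, Y_te]))
    n = len(X_te)
    stat = w[:n].mean() - w[n:].mean()
    # Permutation null: shuffle which held-out points are labeled X vs Y.
    null = np.empty(n_perm)
    for i in range(n_perm):
        perm = rng.permutation(len(w))
        null[i] = w[perm[:n]].mean() - w[perm[n:]].mean()
    p_value = (1 + np.sum(null >= stat)) / (1 + n_perm)
    return stat, p_value
```

Usage would be `stat, p = witness_two_sample_test(X, Y)` with `X`, `Y` as `(n, d)` arrays; fitting the witness by least-squares regression of the $\pm 1$ labels mirrors the squared-loss training the abstract describes.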
Normalizing flows are flexible, parameterized distributions that can be used to approximate expectations under intractable distributions via importance sampling. However, current flow-based approaches are limited on challenging targets, where they suffer from mode-seeking behavior or high variance in the training loss, or rely on samples from the target distribution, which may not be available. To address these challenges, we combine flows with annealed importance sampling (AIS), using the mass-covering $\alpha$-divergence with $\alpha = 2$ as our objective, in a novel training procedure, FAB (Flow AIS Bootstrap). Thereby, the flow and AIS improve each other in a bootstrapping manner. We demonstrate that FAB can be used to produce accurate approximations to complex target distributions, including Boltzmann distributions, in problems where previous flow-based methods fail.
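For readers new to the setup, the first sentence refers to the standard self-normalized importance sampling estimator (a textbook identity, written in our notation): with flow samples $x_i \sim q$ and an unnormalized target density $\tilde{p}$,

$$ \mathbb{E}_{p}\!\left[h(x)\right] \;\approx\; \sum_{i=1}^{N} \bar{w}_i \, h(x_i), \qquad \bar{w}_i = \frac{\tilde{p}(x_i)/q(x_i)}{\sum_{j=1}^{N} \tilde{p}(x_j)/q(x_j)}. $$

This estimator only needs $\tilde{p}$ up to a normalizing constant, which is what makes unnormalized Boltzmann densities usable as targets, and its quality degrades when $q$ misses modes of $p$, hence the emphasis on mass-covering training.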
Charisma is considered one's ability to attract and potentially also influence others. Clearly, there can be considerable interest from an artificial intelligence's (AI) perspective to provide it with such skill. Beyond that, a plethora of use cases opens up for computational measurement of human charisma, such as for tutoring humans in the acquisition of charisma, mediating human-to-human conversation, or identifying charismatic individuals in big social data. A number of models exist that base charisma on various dimensions, often following the idea that charisma is given if someone could and would help others. Examples include influence (could help) and affability (would help) in scientific studies, or power (could help), presence, and warmth (both would help) as a popular concept. Modelling high levels in these dimensions for humanoid robots or virtual agents seems accomplishable. Beyond that, automatic measurement also appears quite feasible with the recent advances in the related fields of Affective Computing and Social Signal Processing. Here, we therefore present a blueprint for building machines that can appear charismatic, but also analyse the charisma of others. To this end, we first provide the psychological perspective, including different models of charisma and behavioural cues of it. We then switch to conversational charisma in spoken language as an exemplary modality that is essential for human-human and human-computer conversations. The computational perspective then deals with the recognition and generation of charismatic behaviour by AI. This includes an overview of the state of play in the field and the aforementioned blueprint. We then name exemplary use cases of computational charismatic skills before switching to ethical aspects and concluding this overview and perspective on building charisma-enabled AI.
There are two important things in science: (A) Finding answers to given questions, and (B) Coming up with good questions. Our artificial scientists not only learn to answer given questions, but also continually invent new questions, by proposing hypotheses to be verified or falsified through potentially complex and time-consuming experiments, including thought experiments akin to those of mathematicians. While an artificial scientist expands its knowledge, it remains biased towards the simplest, least costly experiments that still have surprising outcomes, until they become boring. We present an empirical analysis of the automatic generation of interesting experiments. In the first setting, we investigate self-invented experiments in a reinforcement-providing environment and show that they lead to effective exploration. In the second setting, pure thought experiments are implemented as the weights of recurrent neural networks generated by a neural experiment generator. Initially interesting thought experiments may become boring over time.
Recent advances in deep learning have enabled us to address the curse of dimensionality (COD) by solving problems in higher dimensions. A subset of such approaches to addressing the COD has led us to solving high-dimensional PDEs. This has opened doors to solving a variety of real-world problems ranging from mathematical finance to stochastic control for industrial applications. Although feasible, these deep learning methods are still constrained by training time and memory. Tackling these shortcomings, Tensor Neural Networks (TNN) demonstrate that they can provide significant parameter savings while attaining the same accuracy as the classical Dense Neural Network (DNN). In addition, we also show how TNN can be trained faster than DNN for the same accuracy. Besides TNN, we also introduce Tensor Network Initializer (TNN Init), a weight initialization scheme that leads to faster convergence with smaller variance for an equivalent parameter count as compared to a DNN. We benchmark TNN and TNN Init by applying them to solve the parabolic PDE associated with the Heston model, which is widely used in financial pricing theory.
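For reference, the parabolic PDE associated with the Heston model, in one common form (our addition, with the market price of volatility risk absorbed into the parameters; the paper's conventions may differ), is the pricing equation for an option value $U(S, v, t)$ with spot price $S$ and instantaneous variance $v$:

$$ \frac{\partial U}{\partial t} + \frac{1}{2} v S^2 \frac{\partial^2 U}{\partial S^2} + \rho \sigma v S \frac{\partial^2 U}{\partial S \,\partial v} + \frac{1}{2} \sigma^2 v \frac{\partial^2 U}{\partial v^2} + r S \frac{\partial U}{\partial S} + \kappa (\theta - v) \frac{\partial U}{\partial v} - r U = 0, $$

where $r$ is the risk-free rate, $\kappa$ the mean-reversion speed, $\theta$ the long-run variance, $\sigma$ the volatility of variance, and $\rho$ the correlation between the two driving Brownian motions.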
A statistical ensemble of neural networks can be described in terms of a quantum field theory (NN-QFT correspondence). The infinite-width limit is mapped to a free field theory, while finite N corrections are mapped to interactions. After reviewing the correspondence, we will describe how to implement renormalization in this context and discuss preliminary numerical results for translation-invariant kernels. A major outcome is that changing the standard deviation of the neural network weight distribution corresponds to a renormalization flow in the space of networks.
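Schematically, and in our notation rather than the authors' (the precise scaling depends on architecture), the correspondence states that at infinite width the network output $f$ behaves as a free field: all correlators reduce to the two-point kernel $K$ via Wick's theorem. At finite width $N$, connected higher-point correlators appear as interactions; for example, for a single-hidden-layer network of width $N$ one expects

$$ \mathbb{E}\big[f(x_1) f(x_2)\big] = K(x_1, x_2), \qquad \mathbb{E}\big[f(x_1) \cdots f(x_{2k})\big]_{\mathrm{connected}} = \mathcal{O}\big(N^{1-k}\big). $$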
We present an automatic method for annotating images of indoor scenes with the CAD models of the objects by relying on RGB-D scans. Through a visual evaluation by 3D experts, we show that our method retrieves annotations that are at least as accurate as manual annotations, and can thus be used as ground truth without the burden of manually annotating 3D data. We do this using an analysis-by-synthesis approach, which compares renderings of the CAD models with the captured scene. We introduce a 'cloning procedure' that identifies objects that have the same geometry, to annotate these objects with the same CAD models. This allows us to obtain complete annotations for the ScanNet dataset and the recent ARKitScenes dataset.
This article presents a novel review of Active SLAM (A-SLAM) research conducted in the last decade. We discuss the formulation, application, and methodology applied in A-SLAM for trajectory generation and control-action selection using information-theory-based approaches. Our extensive qualitative and quantitative analysis highlights the approaches, scenarios, configurations, types of robots, sensor types, dataset usage, and path-planning approaches of A-SLAM research. We conclude by presenting the limitations and proposing future research possibilities. We believe that this survey will be helpful to researchers in understanding the various methods and techniques applied to A-SLAM formulation.